Hierarchical Embeddings for Hypernymy Detection and Directionality

Authors

  • Kim Anh Nguyen
  • Maximilian Köper
  • Sabine Schulte im Walde
  • Ngoc Thang Vu
Abstract

EXPERIMENTAL SETTINGS

  • Corpus: ENCOW14A (≈14.5 billion tokens).
  • Baseline: default SGNS (word2vec) with 100 dimensions, window size 5, 15 negative samples, and learning rate 0.025.
  • HyperVec embeddings are learned for nouns and verbs.

SUPERVISED CLASSIFICATION

  • SVM classifier over four feature components: concatenation + difference + cosine + magnitude(hyper).

  Models               BLESS   ENTAILMENT
  Yu et al. (2015)     0.90    0.87
  Tuan et al. (2016)   0.93    0.91
  HyperVec             0.94    0.91
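As a concrete illustration of the supervised setup, the sketch below builds the four-component feature vector and trains an SVM on labelled word pairs. This is not the authors' released code: the names pair_features, train_classifier, and embeddings are hypothetical, embeddings is assumed to map each word to its 100-dimensional HyperVec vector, and the classifier is scikit-learn's SVC.

import numpy as np
from sklearn.svm import SVC

def pair_features(hypo_vec, hyper_vec):
    # Four components: concatenation of both vectors, element-wise
    # difference, cosine similarity, and the magnitude of the hypernym vector.
    cos = np.dot(hypo_vec, hyper_vec) / (
        np.linalg.norm(hypo_vec) * np.linalg.norm(hyper_vec))
    return np.concatenate([
        hypo_vec,                        # concatenation
        hyper_vec,
        hypo_vec - hyper_vec,            # difference
        [cos],                           # cosine
        [np.linalg.norm(hyper_vec)],     # magnitude(hyper)
    ])

def train_classifier(pairs, labels, embeddings):
    # pairs: list of (hyponym, hypernym) word pairs; labels: 1/0 for hypernymy.
    X = np.vstack([pair_features(embeddings[w1], embeddings[w2])
                   for w1, w2 in pairs])
    return SVC().fit(X, labels)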


Similar papers

Robust Cross-lingual Hypernymy Detection using Dependency Context

Cross-lingual Hypernymy Detection involves determining if a word in one language (“fruit”) is a hypernym of a word in another language (“pomme” i.e. apple in French). The ability to detect hypernymy cross-lingually can aid in solving cross-lingual versions of tasks such as textual entailment and event coreference. We propose BISPARSE-DEP, a family of unsupervised approaches for cross-lingual hy...

Learning Term Embeddings for Hypernymy Identification

Hypernymy identification aims at detecting whether the isA relationship holds between two words or phrases. Most previous methods are based on lexical patterns or the Distributional Inclusion Hypothesis, and the accuracy of such methods is not ideal. In this paper, we propose a simple yet effective supervision framework to identify hypernymy relations using distributed term representations (a.k.a term e...
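The Distributional Inclusion Hypothesis referred to above assumes that the typical contexts of a hyponym form a subset of the contexts of its hypernym. The sketch below shows one simple inclusion score of that family (a WeedsPrec-style precision over positive context weights); the function name and the dictionary-based context representation are illustrative assumptions, not part of the cited work.

def inclusion_score(narrow_contexts, broad_contexts):
    # Both arguments map context words to (positive) association weights.
    # The score is the fraction of the narrow term's context weight that is
    # also attested for the broad term: high in the hyponym -> hypernym direction.
    total = sum(w for w in narrow_contexts.values() if w > 0)
    if total == 0:
        return 0.0
    covered = sum(w for c, w in narrow_contexts.items()
                  if w > 0 and broad_contexts.get(c, 0) > 0)
    return covered / total

# Toy example: the contexts of "apple" are largely included in those of
# "fruit", so the score is higher in the apple -> fruit direction.
apple = {"eat": 2.0, "ripe": 1.0, "orchard": 1.0}
fruit = {"eat": 3.0, "ripe": 2.0, "juice": 1.5, "orchard": 0.5, "basket": 1.0}
print(inclusion_score(apple, fruit))   # 1.0
print(inclusion_score(fruit, apple))   # 0.6875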

How Well Can We Predict Hypernyms from Word Embeddings? A Dataset-Centric Analysis

One key property of word embeddings currently under study is their capacity to encode hypernymy. Previous works have used supervised models to recover hypernymy structures from embeddings. However, the overall results do not clearly show how well we can recover such structures. We conduct the first dataset-centric analysis that shows how only the Baroni dataset provides consistent results. We e...

Learning Hypernymy over Word Embeddings

Word embeddings have shown promise in a range of NLP tasks; however, it is currently difficult to accurately encode categorical lexical relations in these vector spaces. We consider one such important relation – hypernymy – and investigate the feasibility of learning a function in vector space to capture it. We argue that hypernymy is significantly harder to capture than the analogy tasks word ...

Specialising Word Vectors for Lexical Entailment

We present LEAR (Lexical Entailment Attract-Repel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymy...
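Specialisation methods of this kind typically score candidate pairs with an asymmetric distance in which the vector norm encodes semantic generality. The sketch below illustrates one such score, combining cosine distance with a normalised norm difference; the exact combination shown is an assumption for illustration, not necessarily LEAR's published metric.

import numpy as np

def le_distance(hypo_vec, hyper_vec):
    # Cosine distance plus a norm-based asymmetry term: pairs in which the
    # second word has the larger norm (i.e. is more general) score lower,
    # so lower values suggest hyponym -> hypernym entailment.
    cos_sim = np.dot(hypo_vec, hyper_vec) / (
        np.linalg.norm(hypo_vec) * np.linalg.norm(hyper_vec))
    norm_term = (np.linalg.norm(hypo_vec) - np.linalg.norm(hyper_vec)) / (
        np.linalg.norm(hypo_vec) + np.linalg.norm(hyper_vec))
    return (1.0 - cos_sim) + norm_term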

Publication year: 2017